Results 1 - 8 of 8
1.
Med Image Anal; 84: 102722, 2023 Feb.
Article in English | MEDLINE | ID: covidwho-2159542

ABSTRACT

Coronavirus disease (COVID-19) has caused a worldwide pandemic, putting millions of people's health and lives in jeopardy. Detecting infected patients early on chest computed tomography (CT) is critical in combating COVID-19. Harnessing uncertainty-aware consensus-assisted multiple instance learning (UC-MIL), we propose to diagnose COVID-19 using a new bilateral adaptive graph-based convolutional network (BA-GCN) model that can use both 2D and 3D discriminative information in 3D CT volumes with an arbitrary number of slices. Given the importance of lung segmentation for this task, we have created the largest manual annotation dataset so far, with 7,768 slices from COVID-19 patients, and have used it to train a 2D segmentation model that segments the lungs from individual slices and masks them as the regions of interest for the subsequent analyses. We then used the UC-MIL model to estimate the uncertainty of each prediction and the consensus between multiple predictions on each CT slice, automatically selecting a fixed number of CT slices with reliable predictions for the subsequent model reasoning. Finally, we adaptively constructed a BA-GCN with vertices from different granularity levels (2D and 3D) to aggregate multi-level features for the final diagnosis, benefiting from the graph convolutional network's ability to model cross-granularity relationships. Experimental results on the three largest COVID-19 CT datasets demonstrate that our model produces reliable and accurate COVID-19 predictions from CT volumes with any number of slices, outperforming existing approaches in learning and generalisation ability. To promote reproducible research, we have made the datasets, including the manual annotations and the cleaned CT dataset, as well as the implementation code, available at https://doi.org/10.5281/zenodo.6361963.
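
As an illustration of the slice-selection step described above, the following is a minimal sketch, assuming Monte Carlo dropout as the uncertainty estimator, a generic 2D backbone (`SliceNet`), and illustrative values for the number of passes and kept slices; none of these names or settings come from the paper.

```python
# Minimal sketch of uncertainty-aware slice selection via Monte Carlo dropout.
# SliceNet, T and K are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class SliceNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(nn.Dropout(0.5), nn.Linear(16, 2))

    def forward(self, x):
        return self.head(self.features(x))

def select_reliable_slices(model, volume, T=10, K=8):
    """volume: (S, 1, H, W) tensor of CT slices; keep the K least-uncertain ones."""
    model.train()                                # keep dropout active for MC sampling
    with torch.no_grad():
        probs = torch.stack([model(volume).softmax(dim=1) for _ in range(T)])
    uncertainty = probs.var(dim=0).sum(dim=1)    # per-slice predictive variance
    keep = uncertainty.argsort()[:K]             # slices with the most consistent predictions
    return volume[keep]

volume = torch.randn(40, 1, 64, 64)              # toy CT volume with 40 slices
reliable = select_reliable_slices(SliceNet(), volume)
print(reliable.shape)                            # torch.Size([8, 1, 64, 64])
```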


Subject(s)
COVID-19 Testing; COVID-19; Humans; Consensus; Uncertainty; COVID-19/diagnostic imaging; Tomography, X-Ray Computed
2.
30th Italian Symposium on Advanced Database Systems, SEBD 2022; 3194:359-366, 2022.
Article in English | Scopus | ID: covidwho-2026950

ABSTRACT

At the end of 2019, a new coronavirus, SARS-CoV-2, was identified as the cause of a lung infection now called COVID-19 (coronavirus disease 2019). Infections have since grown exponentially, and in March 2020 the WHO declared the outbreak a pandemic. Early diagnosis of those carrying the virus is crucial to contain the spread, morbidity, and mortality of the pandemic. The definitive diagnosis is made through specific tests, among which imaging plays an important role in the care path of patients with suspected or confirmed COVID-19. Patients with severe COVID-19 typically experience viral pneumonia. This paper uses the multiple instance learning paradigm to classify pneumonia X-ray images, considering three classes: radiographs of healthy people, of people with bacterial pneumonia, and of people with viral pneumonia. The proposed algorithms, which are very fast in practice, appear promising, especially considering that no preprocessing techniques were used. © 2022 CEUR-WS. All rights reserved.
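
To make the bag-of-patches view of a radiograph concrete, here is a minimal sketch of a max-pooling MIL classifier for the three-class setting (normal / bacterial / viral pneumonia). The patch extraction scheme and the tiny CNN scorer are illustrative assumptions, not the algorithms proposed in the paper.

```python
# Minimal sketch: a radiograph as a bag of patches, classified by max-pooling
# per-instance scores over three classes. Patch size and network are assumptions.
import torch
import torch.nn as nn

class InstanceScorer(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, n_classes),
        )

    def forward(self, patches):                  # patches: (N, 1, 32, 32)
        return self.net(patches)                 # per-instance class scores

def bag_logits(scorer, xray, patch=32):
    """Split one radiograph into non-overlapping patches and max-pool their scores."""
    patches = xray.unfold(1, patch, patch).unfold(2, patch, patch)
    patches = patches.reshape(-1, 1, patch, patch)
    return scorer(patches).max(dim=0).values     # bag-level logits, shape (3,)

xray = torch.randn(1, 128, 128)                  # toy single-channel radiograph
print(bag_logits(InstanceScorer(), xray))
```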

3.
1st IEEE International Conference on Smart Technologies and Systems for Next Generation Computing, ICSTSN 2022; 2022.
Article in English | Scopus | ID: covidwho-1861112

ABSTRACT

Automatic detection of coronavirus infections from CT images is important for rapid examination, but systematically collecting extensive initial training data is difficult. A conditional generative model, the Cycle Generative Adversarial Network (CycleGAN), is used to learn the conditional distribution of individual CT images and to synthesize diverse, higher-resolution (512 × 512) CT images that match the input conditions. It can therefore balance an imbalanced dataset by creating synthetic data. Deep Multiple Instance Learning (DMIL) is applied to train a patch-level classifier, where a chest CT is viewed as a bag of instances. Deep instances are then generated for possible infection areas to reduce false negatives, which would otherwise contribute to further spread. A series of experimental studies shows that the algorithm achieves an overall accuracy of 97.4% without augmentation and 98.96% with CycleGAN augmentation. These advantages make the algorithm an effective screening tool during COVID-19 waves. © 2022 IEEE.
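
The dataset-balancing idea can be sketched as follows, with a trained generator standing in for the CycleGAN mapping; the `balance_with_synthetic` helper and its parameters are assumptions for illustration, not the paper's code.

```python
# Minimal sketch of topping up a minority class with generator-made synthetic
# slices before MIL training. `generator` stands in for a trained CycleGAN.
import torch

def balance_with_synthetic(images, labels, generator, minority=1):
    """images: (N, 1, H, W); labels: (N,). Top up the minority class to parity."""
    counts = torch.bincount(labels)
    deficit = int(counts.max() - counts[minority])
    if deficit == 0:
        return images, labels
    pool = images[labels == minority]
    seeds = pool[torch.randint(len(pool), (deficit,))]   # slices to translate
    synthetic = generator(seeds)                          # (deficit, 1, H, W)
    images = torch.cat([images, synthetic])
    labels = torch.cat([labels, torch.full((deficit,), minority, dtype=labels.dtype)])
    return images, labels

# Toy run with an identity "generator" standing in for CycleGAN.
imgs = torch.randn(10, 1, 64, 64)
lbls = torch.tensor([0] * 7 + [1] * 3)
imgs, lbls = balance_with_synthetic(imgs, lbls, generator=lambda x: x)
print(torch.bincount(lbls))                               # tensor([7, 7])
```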

4.
Comput Methods Programs Biomed; 211: 106406, 2021 Nov.
Article in English | MEDLINE | ID: covidwho-1401346

ABSTRACT

BACKGROUND AND OBJECTIVE: Given that the novel coronavirus disease 2019 (COVID-19) has become a pandemic, a method to accurately distinguish COVID-19 from community-acquired pneumonia (CAP) is urgently needed. However, the spatial uncertainty and morphological diversity of COVID-19 lesions in the lungs, and their subtle differences with respect to CAP, make differential diagnosis non-trivial. METHODS: We propose a deep represented multiple instance learning (DR-MIL) method to fulfill this task. A 3D volumetric CT scan of one patient is treated as one bag, and ten CT slices are selected as the initial instances. For each instance, deep features are extracted from a fine-tuned, pre-trained ResNet-50 and represented as one deep represented instance score (DRIS). Each bag, with a DRIS for each initial instance, is then input into a citation k-nearest neighbor search to generate the final prediction. A total of 141 COVID-19 and 100 CAP CT scans were used. The performance of DR-MIL is compared with other potential strategies and state-of-the-art models. RESULTS: DR-MIL displayed an accuracy of 95% and an area under the curve of 0.943, which were superior to those observed for comparable methods. COVID-19 and CAP exhibited significant differences in both the DRIS and the spatial pattern of lesions (p<0.001). As a means of content-based image retrieval, DR-MIL can identify images used as key instances, references, and citers for visual interpretation. CONCLUSIONS: DR-MIL can effectively represent the deep characteristics of COVID-19 lesions in CT images and accurately distinguish COVID-19 from CAP in a weakly supervised manner. The resulting DRIS is a useful supplement to visual interpretation of the spatial pattern of lesions when screening for COVID-19.
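
A minimal sketch of this bag pipeline follows: per-slice deep features from a ResNet-50 backbone, pooled into a bag descriptor and classified with a plain k-nearest-neighbour classifier. The mean-pooled bag vector and standard kNN stand in for the paper's DRIS representation and citation-kNN and are assumptions for illustration only.

```python
# Minimal sketch: ResNet-50 slice features -> pooled bag vector -> kNN classifier.
import torch
import torch.nn as nn
from torchvision.models import resnet50
from sklearn.neighbors import KNeighborsClassifier

backbone = resnet50(weights=None)                 # load pretrained weights in practice
backbone.fc = nn.Identity()                       # expose 2048-d slice features
backbone.eval()

def bag_vector(ct_slices):
    """ct_slices: (10, 3, 224, 224) tensor of selected slices from one scan."""
    with torch.no_grad():
        feats = backbone(ct_slices)               # (10, 2048) instance features
    return feats.mean(dim=0).numpy()              # simple pooled bag descriptor

# Toy bags: 6 scans, 10 slices each; labels 1 = COVID-19, 0 = CAP.
bags = [bag_vector(torch.randn(10, 3, 224, 224)) for _ in range(6)]
labels = [1, 1, 1, 0, 0, 0]
knn = KNeighborsClassifier(n_neighbors=3).fit(bags, labels)
print(knn.predict([bag_vector(torch.randn(10, 3, 224, 224))]))
```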


Subject(s)
COVID-19; Deep Learning; Pneumonia; Humans; Lysergic Acid Diethylamide/analogs & derivatives; Pneumonia/diagnostic imaging; SARS-CoV-2; Tomography, X-Ray Computed
5.
Med Image Anal; 72: 102105, 2021 Aug.
Article in English | MEDLINE | ID: covidwho-1240507

ABSTRACT

Chest computed tomography (CT)-based analysis and diagnosis of Coronavirus Disease 2019 (COVID-19) plays a key role in combating the outbreak of the pandemic that has rapidly spread worldwide. To date, the disease has infected more than 18 million people, with over 690k deaths reported. Reverse transcription polymerase chain reaction (RT-PCR) is the current gold standard for clinical diagnosis but may produce false negatives; thus, chest CT-based diagnosis is considered more viable. However, accurate screening is challenging due to the difficulty of annotating infected areas, curating large datasets, and the subtle differences between COVID-19 and other viral pneumonias. In this study, we propose an attention-based, end-to-end, weakly supervised framework for the rapid diagnosis of COVID-19 and bacterial pneumonia based on multiple instance learning (MIL). We further incorporate unsupervised contrastive learning for improved accuracy, with attention applied in both spatial and latent contexts; we term this approach Dual Attention Contrastive-based MIL (DA-CMIL). DA-CMIL takes several CT slices of a patient (considered as a bag of instances) as input and outputs a single label. Attention-based pooling is applied to implicitly select key slices in the latent space, whereas spatial attention learns slice spatial context for interpretable diagnosis. A contrastive loss is applied at the instance level to encode the similarity of features from the same patient against representative pooled patient features. Empirical results show that our algorithm achieves an overall accuracy of 98.6% and an AUC of 98.4%. Moreover, ablation studies show the benefit of combining contrastive learning with MIL.
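
The attention pooling and instance-level contrastive term can be sketched as below. The tiny attention module, feature dimension, temperature, and the simplified instance-versus-bag similarity loss are illustrative assumptions; they are in the spirit of the method rather than its published implementation.

```python
# Minimal sketch of attention-based MIL pooling plus an instance-vs-bag
# contrastive term. Dimensions and the loss form are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionMIL(nn.Module):
    def __init__(self, d=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(d, 64), nn.Tanh(), nn.Linear(64, 1))
        self.cls = nn.Linear(d, 2)

    def forward(self, h):                        # h: (n_slices, d) instance features
        a = torch.softmax(self.attn(h), dim=0)   # (n_slices, 1) attention weights
        bag = (a * h).sum(dim=0)                 # pooled patient representation
        return self.cls(bag), bag, h

def instance_bag_contrastive(h, bag, tau=0.1):
    """Pull each instance toward its own patient's pooled feature."""
    sims = F.cosine_similarity(h, bag.unsqueeze(0), dim=1) / tau
    return -F.logsigmoid(sims).mean()

model = AttentionMIL()
feats = torch.randn(12, 128)                     # 12 slice features for one patient
logits, bag, h = model(feats)
loss = F.cross_entropy(logits.unsqueeze(0), torch.tensor([1])) \
       + instance_bag_contrastive(h, bag)
print(float(loss))
```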


Subject(s)
COVID-19; Pneumonia, Viral; Humans; Pandemics; SARS-CoV-2; Tomography, X-Ray Computed
6.
Med Image Anal; 69: 101978, 2021 Apr.
Article in English | MEDLINE | ID: covidwho-1062515

ABSTRACT

Fast and accurate assessment of COVID-19 severity is an essential problem when millions of people are suffering from the pandemic around the world. Chest CT is currently regarded as a popular and informative imaging tool for COVID-19 diagnosis. However, we observe two issues, weak annotation and insufficient data, that may obstruct automatic COVID-19 severity assessment with CT images. To address these challenges, we propose a novel three-component method: 1) a deep multiple instance learning component with instance-level attention to jointly classify the bag and weigh the instances, 2) a bag-level data augmentation component that generates virtual bags by reorganizing high-confidence instances, and 3) a self-supervised pretext component to aid the learning process. We systematically evaluated our method on the CT images of 229 COVID-19 cases, including 50 severe and 179 non-severe cases. Our method obtained an average accuracy of 95.8%, with 93.6% sensitivity and 96.4% specificity, outperforming previous works.
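
The bag-level augmentation component can be illustrated with a minimal sketch: new "virtual" bags are assembled by resampling instances that the current model scores with high confidence. The confidence threshold, bag size, and number of bags are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch of virtual-bag augmentation from high-confidence instances.
import torch

def make_virtual_bags(instances, probs, n_bags=4, bag_size=8, thr=0.9):
    """instances: (N, d) features; probs: (N,) predicted severity probabilities."""
    confident = instances[(probs > thr) | (probs < 1 - thr)]  # confidently scored instances
    if len(confident) < bag_size:
        return []
    bags = []
    for _ in range(n_bags):
        idx = torch.randint(len(confident), (bag_size,))
        bags.append(confident[idx])               # one reorganized virtual bag
    return bags

feats = torch.randn(50, 256)                       # toy instance features
probs = torch.rand(50)                             # toy instance-level predictions
virtual = make_virtual_bags(feats, probs)
print(len(virtual), virtual[0].shape if virtual else None)
```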


Subject(s)
COVID-19/diagnostic imaging; Adolescent; Adult; Aged; Aged, 80 and over; Child; Child, Preschool; Deep Learning; Female; Humans; Infant; Infant, Newborn; Male; Middle Aged; SARS-CoV-2; Severity of Illness Index; Supervised Machine Learning; Tomography, X-Ray Computed; Young Adult
7.
Med Image Anal; 69: 101975, 2021 Apr.
Article in English | MEDLINE | ID: covidwho-1039485

ABSTRACT

The outbreak of COVID-19 around the world has placed great pressure on health care systems, and many efforts have been devoted to artificial intelligence (AI)-based analysis of CT and chest X-ray images to help alleviate the shortage of radiologists and improve diagnostic efficiency. However, only a few works focus on AI-based lung ultrasound (LUS) analysis despite its significant role in COVID-19. In this work, we propose a novel method for severity assessment of COVID-19 patients from LUS and clinical information. Great challenges exist in the heterogeneous data, multi-modality information, and highly nonlinear mapping. To overcome these challenges, we first propose a dual-level supervised multiple instance learning module (DSA-MIL) to effectively combine zone-level representations into patient-level representations. A novel modality alignment contrastive learning module (MA-CLR) is then presented to combine the representations of the two modalities, LUS and clinical information, by matching the two spaces while keeping the discriminative features. To train the nonlinear mapping, a staged representation transfer (SRT) strategy is introduced to maximally leverage the semantic and discriminative information in the training data. We trained the model with LUS data from 233 patients and validated it with 80 patients. Our method effectively combines the two modalities, achieving an accuracy of 75.0% for four-level patient severity assessment and 87.5% for binary severe/non-severe identification. Moreover, our method provides interpretation of the severity assessment by grading each lung zone (with an accuracy of 85.28%) and identifying the pathological patterns of each zone. Our method has great potential in real clinical practice for COVID-19 patients, especially pregnant women and children, for progress monitoring, prognosis stratification, and patient management.
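
A minimal sketch of the overall data flow follows: zone-level LUS representations are attention-pooled into a patient-level vector and fused with clinical variables for a four-level severity output. The feature sizes, the attention pooling, and the fusion head are illustrative assumptions, not the published DSA-MIL or MA-CLR modules.

```python
# Minimal sketch: attention-pool 12 lung-zone features, fuse with clinical
# variables, and predict a 4-level severity score. All sizes are assumptions.
import torch
import torch.nn as nn

class ZoneToPatient(nn.Module):
    def __init__(self, d_zone=128, d_clin=16, n_levels=4):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(d_zone, 64), nn.Tanh(), nn.Linear(64, 1))
        self.head = nn.Sequential(nn.Linear(d_zone + d_clin, 64), nn.ReLU(),
                                  nn.Linear(64, n_levels))

    def forward(self, zones, clinical):          # zones: (12, d_zone), clinical: (d_clin,)
        w = torch.softmax(self.attn(zones), dim=0)
        patient = (w * zones).sum(dim=0)         # pooled patient-level representation
        return self.head(torch.cat([patient, clinical]))

model = ZoneToPatient()
severity_logits = model(torch.randn(12, 128), torch.randn(16))
print(severity_logits.shape)                     # torch.Size([4])
```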


Subject(s)
COVID-19/diagnostic imaging; Lung/diagnostic imaging; Adolescent; Adult; Aged; Aged, 80 and over; Female; Humans; Machine Learning; Male; Middle Aged; SARS-CoV-2; Severity of Illness Index; Tomography, X-Ray Computed; Ultrasonography; Young Adult
8.
Front Bioeng Biotechnol; 8: 898, 2020.
Article in English | MEDLINE | ID: covidwho-732918

ABSTRACT

OBJECTIVES: Coronavirus disease 2019 (COVID-19) is sweeping the globe and has resulted in infections in millions of people. Patients with COVID-19 face a high fatality risk once symptoms worsen; therefore, early identification of severely ill patients can enable early intervention, prevent disease progression, and help reduce mortality. This study aims to develop an artificial intelligence-assisted tool using computed tomography (CT) imaging to predict disease severity and further estimate the risk of developing severe disease in patients suffering from COVID-19. MATERIALS AND METHODS: Initial CT images of 408 confirmed COVID-19 patients were retrospectively collected between January 1, 2020 and March 18, 2020 from hospitals in Honghu and Nanchang. The data of 303 patients from the People's Hospital of Honghu were assigned as the training set, and those of 105 patients from The First Affiliated Hospital of Nanchang University were assigned as the test set. A deep learning-based model using multiple instance learning and a residual convolutional neural network (ResNet34) was developed and validated. The discrimination ability and prediction accuracy of the model were evaluated using the receiver operating characteristic curve and the confusion matrix, respectively. RESULTS: The deep learning-based model had an area under the curve (AUC) of 0.987 (95% confidence interval [CI]: 0.968-1.00) and an accuracy of 97.4% in the training set, whereas it had an AUC of 0.892 (0.828-0.955) and an accuracy of 81.9% in the test set. In the subgroup analysis of patients who had non-severe COVID-19 on admission, the model achieved AUCs of 0.955 (0.884-1.00) and 0.923 (0.864-0.983) and accuracies of 97.0% and 81.6% in the Honghu and Nanchang subgroups, respectively. CONCLUSION: Our deep learning-based model can accurately predict disease severity as well as disease progression in COVID-19 patients using CT imaging, offering promise for guiding clinical treatment.
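
The evaluation described here (discrimination via the ROC curve, prediction accuracy via the confusion matrix) can be reproduced generically with scikit-learn; the scores below are toy values for illustration, not the study's data.

```python
# Minimal sketch of ROC AUC and confusion-matrix evaluation on toy predictions.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])          # 1 = severe, 0 = non-severe
y_score = np.array([0.91, 0.20, 0.75, 0.60, 0.35, 0.10, 0.85, 0.45])
y_pred = (y_score >= 0.5).astype(int)                 # threshold model scores

print("AUC:", roc_auc_score(y_true, y_score))
print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
print("Accuracy:", (y_pred == y_true).mean())
```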
